Maarten Sap

I am an assistant professor at CMU's LTI department with a courtesy appointment in HCII, and a part-time research scientist at the Allen Institute for AI (AI2). My research focuses on endowing NLP systems with social intelligence and social commonsense, and understanding social inequality and bias in language.

Before this, I was a Postdoc/Young Investigator at the Allen Institute for AI (AI2), working on project Mosaic. I received my PhD from the University of Washington, where I was advised by Noah Smith and Yejin Choi. During my PhD, I interned at AI2, working on social commonsense reasoning, and at Microsoft Research, working on deep learning models for understanding human cognition.
[bio for talks]

Recent updates:

January 2025 πŸ‘¨πŸΌβ€πŸ«πŸ§ : Happy to give a talk in Artificial Social Intelligence at the Cluster of Excellence "Science of Intelligence" (SCIoI) at the Technische UniversitΓ€t Berlin.

January 2025 πŸ‘¨πŸΌβ€πŸ«πŸ“’: I'm happy to be giving a talk at the First Workshop on Multilingual Counterspeech Generation at COLING 2025 (remotely)!

December 2024 🇨🇦⛰️: Excited to be attending my very first NeurIPS conference in Vancouver, BC! I'll be giving a talk at New in ML at 3pm on Tuesday!

November 2024: I received a Google Academic Research Award for our work on participatory impact assessment of future AI use cases.

November 2024 πŸ«‚πŸ‘¨β€πŸ«: Very excited that I now have a courtesy appointment in the Human Computer Interaction Institute!

November 2024 πŸ”πŸ§‘β€πŸŽ“: As a reminder, due to my lab being quite full already, I'm not taking any students in this upcoming PhD application cycle 😟.

November 2024 πŸ–οΈπŸ“š: Excited to give a talk at the 6th Workshop on Narrative Understanding on Computational Methods of Social Causes and Effects of Stories.

[older news]


My research group:

Dan Chechelnitsky

LTI PhD student
co-advised with Chrysoula Zerva

Joel Mire

LTI MLT student

Karina Halevy

LTI PhD student
co-advised with Mona Diab

Jimin Mun

LTI PhD student

Jocelyn Shen

MIT PhD student
co-advised with Cynthia Breazeal

Akhila Yerukola

LTI PhD student

Mingqian Zheng

LTI PhD student
co-advised with Carolyn RosΓ©

Xuhui Zhou

LTI PhD student


Overarching Research Themes

*Extracted by GPT-4; there may be inconsistencies.*

#### *Ethical AI and Cultural Sensitivity*

My research group explores the intersection of AI ethics and cultural sensitivity in artificial intelligence applications. A key paper in this area is [Mind the Gesture: Evaluating AI Sensitivity to Culturally Offensive Non-Verbal Gestures](https://arxiv.org/abs/2502.17710), which investigates how AI systems interpret non-verbal cues and the implications for diverse user interactions. Another important contribution is [Rejected Dialects: Biases Against African American Language in Reward Models](https://arxiv.org/abs/2502.12858), which examines systemic biases in reward models that affect AI's understanding of different dialects. We also examine the need for diverse perspectives in AI technology in [Diverse Perspectives on AI: Examining People's Acceptability and Reasoning of Possible AI Use Cases](https://arxiv.org/abs/2502.07287).

#### *Navigating Narratives in AI*

My research group explores the intricacies of narrative analysis and storytelling on digital platforms. We emphasize understanding narrative structure through our work on [Quantifying the narrative flow of imagined versus autobiographical stories](https://www.pnas.org/doi/10.1073/pnas.2211715119), which provides insights into how narratives are perceived differently depending on context. Additionally, our investigation [HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs](https://arxiv.org/abs/2405.17633) sheds light on emotional resonance in AI-generated narratives. This theme underscores the importance of empathy and storytelling in enhancing human-AI interactions.

#### *Social Intelligence and Agent Interaction*

My research group explores the design and evaluation of socially intelligent AI agents capable of nuanced interactions. The paper [SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents](https://arxiv.org/abs/2310.11667) presents methods for assessing the social reasoning capabilities of these agents, while [Clever Hans or Neural Theory of Mind? Stress Testing Social Reasoning in Large Language Models](https://arxiv.org/abs/2305.14763) critically evaluates the limits of social understanding in language models. Our work reflects a commitment to building AI systems that can engage with human-like understanding and empathy in social contexts.

#### *Addressing Toxicity in AI-Generated Text*

My research group explores mechanisms to identify and mitigate toxic language in AI outputs. In [PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models](https://arxiv.org/abs/2405.09373), we evaluate toxic degeneration across multilingual contexts and its implications for the safety of AI applications. We also examine [Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language](https://arxiv.org/abs/2311.00161), which focuses on strategies for countering subtle biases in online communication. This research aims to develop frameworks that prioritize safe and equitable interactions with AI systems.